
    Interactions between endocarditis-derived Streptococcus gallolyticus subsp. gallolyticus isolates and human endothelial cells

    Background: Streptococcus gallolyticus subsp. gallolyticus is an important causative agent of infective endocarditis (IE), but knowledge of its virulence factors is limited and the pathogenesis of the infection is poorly understood. In the present study, we established an experimental in vitro IE cell culture model using EA.hy926 and HUVEC cells to investigate the adhesion and invasion characteristics of 23 Streptococcus gallolyticus subsp. gallolyticus strains from different origins (human IE-derived isolates, other human clinical isolates, animal isolates). Adhesion to eight components of the extracellular matrix (ECM) and the ability to form biofilms in vitro were examined in order to reveal features of S. gallolyticus subsp. gallolyticus endothelial infection. In addition, the strains were analyzed for the presence of the three virulence factors gtf, pilB, and fimB by PCR. Results: The adherence and invasion characteristics of the examined S. gallolyticus subsp. gallolyticus strains toward the endothelial cell line EA.hy926 differ significantly between strains. In contrast, the use of three different in vitro models (EA.hy926 cells, primary endothelial cells (HUVECs), mechanically stretched cells) revealed no differences in the adherence and invasion characteristics of individual strains. Adherence to the ECM proteins collagen I, II and IV showed the highest values, followed by fibrinogen, tenascin and laminin. Moreover, a strong correlation was observed in the binding of the analyzed strains to these proteins. All strains were capable of adhering to polystyrene surfaces and forming biofilms. We further confirmed the presence of the genes of two known virulence factors (fimB: all strains, gtf: 19 of 23 strains) and demonstrated the presence of the gene of one new putative virulence factor (pilB: 9 of 23 strains) by PCR. Conclusion: Our study provides the first description of S. gallolyticus subsp. gallolyticus adhesion to and invasion of human endothelial cells, revealing important initial information on strain variability, behaviour and characteristics of this as yet barely analyzed pathogen.

    Reference-guided Pseudo-Label Generation for Medical Semantic Segmentation

    Producing densely annotated data is a difficult and tedious task for medical imaging applications. To address this problem, we propose a novel approach to generate supervision for semi-supervised semantic segmentation. We argue that visually similar regions between labeled and unlabeled images likely contain the same semantics and therefore should share their label. Following this thought, we use a small number of labeled images as reference material and match pixels in an unlabeled image to the semantics of the best fitting pixel in the reference set. This way, we avoid pitfalls such as confirmation bias, common in purely prediction-based pseudo-labeling. Since our method does not require any architectural changes or accompanying networks, one can easily insert it into existing frameworks. We achieve the same performance as a standard fully supervised model on X-ray anatomy segmentation, albeit with 95% fewer labeled images. Aside from an in-depth analysis of different aspects of our proposed method, we further demonstrate the effectiveness of our reference-guided learning paradigm by comparing our approach against existing methods for retinal fluid segmentation, where we achieve competitive performance and improve upon recent work by up to 15% mean IoU.
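    As a rough illustration of the matching step described in this abstract, the sketch below assigns every pixel of an unlabeled image the label of its most similar pixel in a small reference set. The feature shapes, the PyTorch formulation and the match_pseudo_labels helper are assumptions made for illustration, not the authors' implementation, which may differ in how features and similarities are computed.

        # Hypothetical sketch: reference-guided pseudo-labels via nearest-pixel matching.
        # Assumes per-pixel feature embeddings have already been extracted by some encoder.
        import torch
        import torch.nn.functional as F

        def match_pseudo_labels(unlabeled_feats, ref_feats, ref_labels):
            # unlabeled_feats: (C, H, W) features of the unlabeled image
            # ref_feats:       (N, C, H, W) features of N labeled reference images
            # ref_labels:      (N, H, W) integer segmentation masks of the references
            C, H, W = unlabeled_feats.shape
            q = F.normalize(unlabeled_feats.reshape(C, -1), dim=0)                # (C, H*W)
            k = F.normalize(ref_feats.permute(1, 0, 2, 3).reshape(C, -1), dim=0)  # (C, N*H*W)
            sim = q.t() @ k                    # cosine similarity between every pixel pair
            best = sim.argmax(dim=1)           # best-fitting reference pixel per query pixel
            # Each unlabeled pixel inherits the class of its nearest reference pixel,
            # so none of the model's own predictions are reused (no confirmation bias).
            # In practice the similarity matrix would be computed in chunks to bound memory.
            return ref_labels.reshape(-1)[best].reshape(H, W)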

    Self-Guided Multiple Instance Learning for Weakly Supervised Thoracic Disease Classification and Localization in Chest Radiographs

    Due to the high complexity of medical images and the scarcity of trained personnel, most large-scale radiological datasets lack fine-grained annotations and are often only described at image level. These shortcomings hinder the deployment of automated diagnosis systems, which require human-interpretable justification for their decision process. In this paper, we address the problem of weakly supervised identification and localization of abnormalities in chest radiographs in a multiple-instance learning setting. To that end, we introduce a novel loss function for training convolutional neural networks that increases the localization confidence and assists the overall disease identification. The loss leverages both image- and patch-level predictions to generate auxiliary supervision and enables specific training at patch level. Rather than forming strictly binary targets from the predictions as done in previous loss formulations, we create targets in a more customized manner. This way, the loss accounts for possible misclassification of less certain instances. We show that the supervision provided within the proposed learning scheme leads to better performance and more precise predictions than previously used losses, both on prevalent datasets for multiple-instance learning and on the NIH ChestX-Ray14 benchmark for disease recognition.
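    A minimal sketch of how image- and patch-level predictions might be combined into a loss of this kind is given below; the soft-target construction and the weighting are illustrative assumptions, not the paper's exact formulation.

        # Illustrative self-guided MIL-style loss: the image label and the patches'
        # own confidences shape soft (non-binary) patch targets.
        import torch
        import torch.nn.functional as F

        def self_guided_mil_loss(patch_logits, image_logit, image_label, alpha=0.5):
            # patch_logits: (P,) logits for P patches of one radiograph
            # image_logit:  scalar tensor, aggregated image-level logit
            # image_label:  scalar tensor (0.0 or 1.0), image-level ground truth
            image_loss = F.binary_cross_entropy_with_logits(image_logit, image_label)
            with torch.no_grad():
                patch_probs = torch.sigmoid(patch_logits)
                # For a positive image, confident patches get targets near 1 while
                # uncertain patches keep lower targets instead of a hard 0/1 assignment.
                soft_targets = image_label * patch_probs / patch_probs.max().clamp_min(1e-6)
            patch_loss = F.binary_cross_entropy_with_logits(patch_logits, soft_targets)
            return alpha * image_loss + (1.0 - alpha) * patch_loss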

    Self-Guided Multiple Instance Learning for Weakly Supervised Disease Classification and Localization in Chest Radiographs

    The lack of fine-grained annotations hinders the deployment of automated diagnosis systems, which require human-interpretable justification for their decision process. In this paper, we address the problem of weakly supervised identification and localization of abnormalities in chest radiographs. To that end, we introduce a novel loss function for training convolutional neural networks that increases the localization confidence and assists the overall disease identification. The loss leverages both image- and patch-level predictions to generate auxiliary supervision. Rather than forming strictly binary targets from the predictions as done in previous loss formulations, we create targets in a more customized manner, which allows the loss to account for possible misclassification. We show that the supervision provided within the proposed learning scheme leads to better performance and more precise predictions than previously used losses, both on prevalent datasets for multiple-instance learning and on the NIH ChestX-Ray14 benchmark for disease recognition.

    Breaking with Fixed Set Pathology Recognition through Report-Guided Contrastive Training

    When reading images, radiologists generate text reports describing the findings therein. Current state-of-the-art computer-aided diagnosis tools utilize a fixed set of predefined categories automatically extracted from these medical reports for training. This form of supervision limits the potential usage of models, as they are unable to pick up on anomalies outside of their predefined set, making it necessary to retrain the classifier with additional data when faced with novel classes. In contrast, we investigate direct text supervision to break away from this closed set assumption. By doing so, we avoid noisy label extraction via text classifiers and incorporate more contextual information. We employ a contrastive global-local dual-encoder architecture to learn concepts directly from unstructured medical reports while maintaining the ability to perform free-form classification. We investigate relevant properties of open set recognition for radiological data and propose a method to incorporate currently weakly annotated data into training. We evaluate our approach on the large-scale chest X-ray datasets MIMIC-CXR, CheXpert, and ChestX-Ray14 for disease classification. We show that, despite using unstructured medical report supervision, we perform on par with direct label supervision through a sophisticated inference setting.
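    The global part of such a dual-encoder objective can be sketched as a symmetric image-report contrastive loss, as below; the temperature, batch handling and function name are assumptions, and the local (region-word) term is omitted.

        # Sketch of a global image-report contrastive term (CLIP-style);
        # the local alignment used by global-local dual encoders is not shown.
        import torch
        import torch.nn.functional as F

        def global_contrastive_loss(img_emb, txt_emb, temperature=0.07):
            # img_emb, txt_emb: (B, D) embeddings of paired chest X-rays and reports
            img = F.normalize(img_emb, dim=-1)
            txt = F.normalize(txt_emb, dim=-1)
            logits = img @ txt.t() / temperature                 # (B, B) similarity matrix
            targets = torch.arange(img.size(0), device=img.device)
            # Symmetric cross-entropy: match each image to its report and vice versa.
            return 0.5 * (F.cross_entropy(logits, targets) +
                          F.cross_entropy(logits.t(), targets))

    At inference, free-form classification can then amount to embedding candidate finding descriptions with the text encoder and selecting the one most similar to the image embedding.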

    Jointly Optimized Deep Neural Networks to Synthesize Monoenergetic Images from Single-Energy CT Angiography for Improving Classification of Pulmonary Embolism

    Detector-based spectral CT offers the possibility of obtaining spectral information from which discrete acquisitions at different energy levels can be derived, yielding so-called virtual monoenergetic images (VMI). In this study, we aimed to develop a jointly optimized deep-learning framework based on dual-energy CT pulmonary angiography (DE-CTPA) data to generate synthetic monoenergetic images (SMI) for improving automatic pulmonary embolism (PE) detection in single-energy CTPA scans. For this purpose, we used two datasets: our institutional DE-CTPA dataset D1, comprising polyenergetic arterial series and the corresponding VMI at low energy levels (40 keV) with 7892 image pairs, and a 10% subset of the 2020 RSNA Pulmonary Embolism CT Dataset D2, which consisted of 161,253 polyenergetic images with dichotomous slice-wise annotations (PE/no PE). We trained a fully convolutional encoder-decoder on D1 to generate SMI from single-energy CTPA scans of D2, which were then fed into a ResNet50 network for training of the downstream PE classification task. The quantitative results on the reconstruction ability of our framework revealed high-quality visual SMI predictions with reconstruction results of 0.984 ± 0.002 (structural similarity) and 41.706 ± 0.547 dB (peak signal-to-noise ratio). PE classification resulted in an AUC of 0.84 for our model, which achieved improved performance compared to other naïve approaches with AUCs of up to 0.81. Our study stresses the role of joint optimization strategies for deep-learning algorithms in improving automatic PE detection. The proposed pipeline may prove beneficial for automated PE detection in single-energy CTPA scans.
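    A hypothetical sketch of a single joint optimization step, combining an image-synthesis loss with the downstream classification loss, is shown below. Note that in the study the VMI targets (D1) and the PE labels (D2) come from different datasets, so a faithful implementation would alternate or mask the two terms rather than use one paired batch as in this simplified example; model classes, loss weights and names are placeholders.

        # Hypothetical joint training step: an encoder-decoder synthesizes a
        # monoenergetic-like image that a classifier then labels for PE.
        import torch
        import torch.nn.functional as F

        def joint_step(generator, classifier, optimizer, ctpa, vmi_target, pe_label,
                       w_rec=1.0, w_cls=1.0):
            # ctpa:       (B, 1, H, W) polyenergetic single-energy CTPA slices
            # vmi_target: (B, 1, H, W) 40 keV VMI targets (assumed paired here)
            # pe_label:   (B,) slice-wise PE labels (0/1)
            smi = generator(ctpa)                                 # synthetic monoenergetic images
            rec_loss = F.l1_loss(smi, vmi_target)                 # synthesis term
            cls_logits = classifier(smi).squeeze(-1)              # e.g. a ResNet50 with one output
            cls_loss = F.binary_cross_entropy_with_logits(cls_logits, pe_label.float())
            loss = w_rec * rec_loss + w_cls * cls_loss            # both networks share the gradient
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()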

    Apple Vision Pro for Healthcare: "The Ultimate Display"? -- Entering the Wonderland of Precision Medicine

    At the Worldwide Developers Conference (WWDC) in June 2023, Apple introduced the Vision Pro. The Vision Pro is a Mixed Reality (MR) headset; more specifically, it is a Virtual Reality (VR) device with an additional Video See-Through (VST) capability. The VST capability also turns the Vision Pro into an Augmented Reality (AR) device. The AR feature is enabled by streaming the real world via cameras to the (VR) screens in front of the user's eyes. This is of course not unique and is similar to other devices, such as the Varjo XR-3. Nevertheless, the Vision Pro has some interesting features, like an outward-facing screen that can show the headset wearer's eyes to "outsiders", or a button on the top, called the "Digital Crown", that allows the wearer to seamlessly blend digital content with the physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile compared to the Varjo XR-3. This could actually come closer to the "Ultimate Display" that Ivan Sutherland already sketched in 1965. Like the Ultimate Display, the Vision Pro is not yet available to the public, so in this perspective we take a look into the crystal ball to see whether it can overcome some of the clinical challenges that AR in particular still faces in the medical domain, but also go beyond and discuss whether the Vision Pro could support clinicians in essential tasks so that they can spend more time with their patients.

    'A net for everyone': fully personalized and unsupervised neural networks trained with longitudinal data from a single patient

    With the rise in importance of personalized medicine, we trained personalized neural networks to detect tumor progression in longitudinal datasets. The model was evaluated on two datasets with a total of 64 scans from 32 patients diagnosed with glioblastoma multiforme (GBM). Contrast-enhanced T1-weighted sequences of brain magnetic resonance imaging (MRI) were used in this study. For each patient, we trained their own neural network using just two images from different timepoints. Our approach uses a Wasserstein GAN (generative adversarial network), an unsupervised network architecture, to map the differences between the two images. Using this map, the change in tumor volume can be evaluated. Due to the combination of data augmentation and the network architecture, co-registration of the two images is not needed. Furthermore, we do not rely on any additional training data, (manual) annotations or pre-trained neural networks. The model achieved an AUC of 0.87 for tumor change. We also introduce modified RANO criteria, for which an accuracy of 66% can be achieved. We show that data from just one patient is sufficient to train deep neural networks to monitor tumor change.
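    Under standard Wasserstein-GAN assumptions, the training signals and a crude change readout could look like the sketch below; the actual architecture, gradient penalty and the paper's exact change evaluation are not reproduced here, and all names are illustrative.

        # Illustrative WGAN losses for mapping the baseline scan to the follow-up,
        # plus a naive change map derived from the generator's prediction.
        import torch

        def critic_loss(critic, real_t2, fake_t2):
            # Wasserstein critic: score real follow-up scans higher than generated ones.
            return critic(fake_t2).mean() - critic(real_t2).mean()

        def generator_loss(critic, fake_t2):
            # The generator tries to make its synthesized follow-up look real to the critic.
            return -critic(fake_t2).mean()

        def change_map(generator, t1, t2):
            # Crude change estimate: difference between the observed follow-up scan
            # and what the generator predicts from the baseline scan.
            with torch.no_grad():
                fake_t2 = generator(t1)
            return (t2 - fake_t2).abs()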